99 research outputs found

    Perception of Virtual Audiences

    Get PDF

    Expressing social attitudes in virtual agents for social training games

    Full text link
    The use of virtual agents in social coaching has increased rapidly in the last decade. In order to train the user in different situations that can occur in real life, the virtual agent should be able to express different social attitudes. In this paper, we propose a model of social attitudes that enables a virtual agent to reason about the appropriate social attitude to express during the interaction with a user, given the course of the interaction as well as the emotions, mood and personality of the agent. Moreover, the model enables the virtual agent to display its social attitude through its non-verbal behaviour. The proposed model has been developed in the context of job interview simulation. The methodology used to develop the model combined a theoretical and an empirical approach: the model is based both on the Human and Social Sciences literature on social attitudes and on the analysis of an audiovisual corpus of job interviews, complemented by post-hoc interviews with the recruiters about the attitudes they expressed during the job interview.

    Affect-LM: A Neural Language Model for Customizable Affective Text Generation

    Full text link
    Human verbal communication includes affective messages which are conveyed through the use of emotionally colored words. There has been a lot of research in this direction, but the problem of integrating state-of-the-art neural language models with affective information remains an area ripe for exploration. In this paper, we propose an extension to an LSTM (Long Short-Term Memory) language model for generating conversational text, conditioned on affect categories. Our proposed model, Affect-LM, enables us to customize the degree of emotional content in generated sentences through an additional design parameter. Perception studies conducted using Amazon Mechanical Turk show that Affect-LM generates natural-looking emotional sentences without sacrificing grammatical correctness. Affect-LM also learns affect-discriminative word representations, and perplexity experiments show that additional affective information in conversational text can improve language model prediction.
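    The conditioning idea behind such a model can be illustrated with a toy sketch (not the authors' implementation): an affect energy term, scaled by a design parameter beta, biases the word distribution toward words of the active affect category. The vocabulary, logits and affect scores below are invented for illustration:

```python
import math

# Toy sketch: the base model assigns a logit to each word; an affect term
# beta * g(w) shifts the distribution toward emotionally colored words.
# All numbers here are illustrative, not taken from the paper.
vocab = ["fine", "great", "terrible", "wonderful", "okay"]
base_logits = {"fine": 1.0, "great": 0.8, "terrible": 0.5,
               "wonderful": 0.4, "okay": 0.9}
# g(w): 1.0 if the word belongs to the active affect category ("positive")
affect_score = {"fine": 0.0, "great": 1.0, "terrible": 0.0,
                "wonderful": 1.0, "okay": 0.0}

def affect_distribution(beta):
    """Softmax over base logits shifted by beta * affect score."""
    scores = {w: base_logits[w] + beta * affect_score[w] for w in vocab}
    z = sum(math.exp(s) for s in scores.values())
    return {w: math.exp(s) / z for w, s in scores.items()}

neutral = affect_distribution(beta=0.0)
positive = affect_distribution(beta=2.0)
# Raising beta increases the probability of affect-category words.
assert positive["great"] > neutral["great"]
```

    Setting beta to zero recovers the unconditioned model, which is what makes the degree of emotional content a single tunable knob.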

    Investigating the Physiological Responses to Virtual Audience Behavioral Changes: A Stress-Aware Audience for Public Speaking Training

    Get PDF
    Virtual audiences have been used in psychotherapy for the treatment of public speaking anxiety, and recent studies show promising results, with patients undergoing cognitive-behavior therapy with virtual reality exposure maintaining a reduction in their anxiety disorder for a year after treatment. It has been shown that virtual audiences exhibiting positive or negative behavior trigger different stress responses; however, research on the effect of virtual audience behaviors has been scarce. In particular, it is unclear how variations in audience behavior can make the user's stress levels vary while they are presenting. In this paper, we present a study investigating the relationship between virtual audience behaviors and physiological measurements of stress. We use the Cicero virtual audience framework, which allows for precise manipulation of the audience's perceived level of arousal and valence through incremental changes in individual audience members' behaviors. Additionally, we introduce the concept of a stress-aware virtual audience for public speaking training, which uses physiological assessments and virtual audience stimuli to maintain the user in a challenging, non-threatening state.

    Suggestions for Extending SAIBA with the VIB Platform

    Get PDF

    It’s not Just What You Do but also When You Do It: Novel Perspectives for Informing Interactive Public Speaking Training

    Get PDF
    Most of the emerging public speaking training systems, while very promising, leverage temporal-aggregate features, which do not take into account the structure of the speech. In this paper, we take a different perspective, testing whether some well-known socio-cognitive theories, like first impressions or the primacy and recency effect, apply in the distinct context of public speaking perception. We investigated the impact of the temporal location of speech slices (i.e., at the beginning, middle or end) on the perception of confidence and persuasiveness of speakers giving online movie reviews (the Persuasive Opinion Multimedia dataset). Results show that, when considering multi-modality, the middle part of the speech is usually the most informative. Additional findings also suggest the value of leveraging local interpretability (by computing SHAP values) to provide feedback directly, both at a specific time (what speech part?) and for a specific behaviour modality or feature (what behaviour?). This is a first step towards the design of more explainable and pedagogical interactive training systems. Such systems could be more efficient by focusing on improving the speaker's most important behaviour during the most important moments of their performance, and by situating feedback at specific places within the total speech.
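    The local-attribution idea (per-slice SHAP values) can be sketched with an exact Shapley computation on a tiny additive "persuasiveness" model over three speech slices. The weights, baseline and slice scores below are invented for illustration; the paper applies SHAP to real multimodal features:

```python
from itertools import combinations
from math import factorial

# Hypothetical additive model: overall score is a weighted sum of
# per-slice scores. Numbers are illustrative only.
FEATURES = ["beginning", "middle", "end"]
WEIGHTS = {"beginning": 0.2, "middle": 0.5, "end": 0.3}
BASELINE = {f: 0.5 for f in FEATURES}              # dataset-average slice score
x = {"beginning": 0.6, "middle": 0.9, "end": 0.4}  # one speaker's scores

def model(values):
    return sum(WEIGHTS[f] * values[f] for f in FEATURES)

def shapley(feature):
    """Exact Shapley value: average marginal contribution over all subsets."""
    others = [f for f in FEATURES if f != feature]
    n, total = len(FEATURES), 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            present = set(subset) | {feature}
            with_f = {f: (x[f] if f in present else BASELINE[f])
                      for f in FEATURES}
            without = {f: (x[f] if f in subset else BASELINE[f])
                       for f in FEATURES}
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (model(with_f) - model(without))
    return total

contributions = {f: shapley(f) for f in FEATURES}
# For an additive model this reduces to w_f * (x_f - baseline_f), e.g. the
# "middle" slice contributes 0.5 * (0.9 - 0.5) = 0.20 to the prediction.
```

    A positive contribution flags a slice (or modality) the speaker should keep; a negative one flags where targeted feedback would help most.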

    Adaptation of the semi-Hertzian method to wheel/rail contact in turnouts

    Get PDF
    A procedure is described to assess the loads applied on a turnout due to track-train interaction. Co-simulation is used between a finite element method (FEM) model of the turnout and a multibody system (MBS) model of the vehicle. Wheel/rail contact forces are computed in the MBS and applied to the rails of the turnout, modelled as FEM beams. FEM displacements under the wheel are accounted for in the MBS at the next time step. A modification has been applied to the semi-Hertzian (SH) method used to assess wheel/rail forces. This adapted SH method is designed to take into account the relative flexibility of the components of the turnout, such as the stock rail and the switch rail. Such parts have their own degrees of freedom and may to some extent behave independently; the proposed method takes this into account in the contact search. The co-simulation has first been applied to a reference case study.
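    The staggered coupling loop described here can be sketched in a hypothetical one-degree-of-freedom form: at each step the MBS side evaluates the contact force using the rail deflection from the previous FEM step, then the FEM side updates the rail response under that load. The stiffness values and the quasi-static 1-DOF rail are illustrative placeholders, not turnout data:

```python
# Illustrative parameters (not from the paper)
K_CONTACT = 2.0e7     # wheel/rail contact stiffness [N/m]
K_RAIL = 5.0e7        # rail flexibility reduced to one DOF [N/m]
PENETRATION = 1.0e-3  # imposed wheel approach toward the rail [m]

def cosimulate(n_steps):
    """Staggered loop: the MBS force uses last step's FEM displacement."""
    rail_disp = 0.0
    history = []
    for _ in range(n_steps):
        # MBS side: penalty-style contact force from the current
        # penetration, reduced by the rail deflection fed back by the FEM
        force = K_CONTACT * max(0.0, PENETRATION - rail_disp)
        # FEM side: quasi-static rail deflection under that force,
        # passed back to the MBS at the next time step
        rail_disp = force / K_RAIL
        history.append((force, rail_disp))
    return history

# The loop converges to the coupled equilibrium
#   d* = p * r / (1 + r),  with r = K_CONTACT / K_RAIL,
# provided r < 1 in this explicit scheme (stiffer coupling would need
# under-relaxation or an implicit iteration).
```

    The fixed-point structure of this toy loop is the essence of the co-simulation: each solver only ever sees the other's result from the previous step.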

    Vers des Agents Conversationnels Animés dotés d'émotions et d'attitudes sociales (Towards Embodied Conversational Agents Endowed with Emotions and Social Attitudes)

    No full text
    In this article, we propose an architecture for a socio-affective Embodied Conversational Agent (ECA). The different computational models underlying the architecture enable an ECA to express emotions and social attitudes during an interaction with a user. Based on corpora of actors expressing emotions, models have been defined to compute the emotional facial expressions of an ECA and the characteristics of its body movements. A user-perception-centered approach has been used to design models defining how an ECA should adapt its non-verbal behavior according to the social attitude it wants to display and to the behavior of its interlocutor. The emotions and social attitudes to express are computed by cognitive models presented in this article.